Robust Subspace Clustering via Thresholding
Authors
Abstract
Similar Papers
Robust Subspace Clustering via Thresholding Ridge Regression
In this material, we provide theoretical analyses showing that the trivial coefficients always correspond to the codes over errors. Lemmas 1–3 show that our error-removing strategy performs well when the ℓp-norm is enforced over the representation, where p ∈ {1, 2, ∞}. Let x ≠ 0 be a data point in the union of subspaces S_D that is spanned by D = [D_x D_{-x}], where D_x and D_{-x} consist of th...
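The fragment above describes removing errors by discarding the trivial (small-magnitude) coefficients of a regularized representation. A minimal sketch of that general idea in Python, assuming a ridge-regression coding step and a simple keep-the-k-largest threshold; the helper name trr_affinity, the parameters lam and keep, and the toy data are illustrative assumptions rather than the paper's exact algorithm:

import numpy as np
from sklearn.cluster import SpectralClustering

def trr_affinity(X, lam=0.1, keep=10):
    # Code each column of X over the remaining columns with ridge regression,
    # zero all but the `keep` largest-magnitude coefficients (the small,
    # "trivial" ones are treated as codes over errors), and symmetrize.
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        D = X[:, idx]                          # dictionary without x_i
        c = np.linalg.solve(D.T @ D + lam * np.eye(n - 1), D.T @ X[:, i])
        small = np.argsort(np.abs(c))[:-keep]  # indices of trivial coefficients
        c[small] = 0.0
        C[idx, i] = c
    return np.abs(C) + np.abs(C).T

# Illustrative usage on synthetic data drawn from two 2-D subspaces.
rng = np.random.default_rng(0)
B1, B2 = rng.standard_normal((30, 2)), rng.standard_normal((30, 2))
X = np.hstack([B1 @ rng.standard_normal((2, 40)), B2 @ rng.standard_normal((2, 40))])
A = trr_affinity(X, lam=0.1, keep=8)
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)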
Learning Robust Subspace Clustering
We propose a low-rank transformation-learning framework to robustify subspace clustering. Many high-dimensional data, such as face images and motion sequences, lie in a union of low-dimensional subspaces. The subspace clustering problem has been extensively studied in the literature to partition such high-dimensional data into clusters corresponding to their underlying low-dimensional subspaces....
Robust Subspace Clustering
Subspace clustering refers to the task of finding a multi-subspace representation that best fits a collection of points taken from a high-dimensional space. This paper introduces an algorithm inspired by sparse subspace clustering (SSC) [25] to cluster noisy data, and develops some novel theory demonstrating its correctness. In particular, the theory uses ideas from geometric functional analysi...
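The abstract refers to an algorithm inspired by SSC for clustering noisy data. Below is a minimal sketch of the standard lasso form of SSC that such an approach builds on, not the paper's exact method; the helper name ssc_lasso_affinity and the regularization value alpha are assumptions:

import numpy as np
from sklearn.linear_model import Lasso

def ssc_lasso_affinity(X, alpha=0.01):
    # Express each column of X as a sparse combination of the other columns
    # (the lasso version of SSC), then symmetrize the coefficient magnitudes
    # into an affinity matrix.
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        model.fit(X[:, idx], X[:, i])
        C[idx, i] = model.coef_
    return np.abs(C) + np.abs(C).T

# The resulting affinity matrix would typically be fed to spectral clustering,
# e.g. sklearn.cluster.SpectralClustering(affinity="precomputed").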
Subspace Clustering with Irrelevant Features via Robust Dantzig Selector
This paper considers the subspace clustering problem where the data contains irrelevant or corrupted features. We propose a method termed “robust Dantzig selector” which can successfully identify the clustering structure even with the presence of irrelevant features. The idea is simple yet powerful: we replace the inner product by its robust counterpart, which is insensitive to the irrelevant f...
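The fragment does not show which robust counterpart of the inner product is used; one plausible choice, shown here purely as an assumption, is a trimmed inner product that discards the largest-magnitude elementwise products so that a few corrupted or irrelevant coordinates cannot dominate:

import numpy as np

def trimmed_inner_product(u, v, q=5):
    # Drop the q largest-magnitude elementwise products before summing, so a
    # handful of corrupted coordinates cannot dominate the inner product.
    # (The trimming rule and the value of q are illustrative assumptions.)
    prod = u * v
    keep = np.argsort(np.abs(prod))[: max(len(prod) - q, 0)]
    return prod[keep].sum()

def robust_correlations(X, x, q=5):
    # Replace each inner product <x_j, x> appearing in a Dantzig-selector-style
    # constraint with its trimmed counterpart.
    return np.array([trimmed_inner_product(X[:, j], x, q) for j in range(X.shape[1])])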
Robust Regression via Hard Thresholding
We study the problem of Robust Least Squares Regression (RLSR) where several response variables can be adversarially corrupted. More specifically, for a data matrix X ∈ R^{p×n} and an underlying model w*, the response vector is generated as y = X^T w* + b, where b ∈ R^n is the corruption vector supported over at most C·n coordinates. Existing exact recovery results for RLSR focus solely on L1-penalty ba...
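A minimal sketch of the alternating hard-thresholding idea for RLSR, in the spirit of the abstract's setup y = X^T w* + b with a sparse corruption vector b; the function name, the iteration count, and the assumption that k upper-bounds the number of corruptions are all illustrative:

import numpy as np

def robust_lsr_hard_threshold(X, y, k, n_iter=30):
    # Alternate between (1) ordinary least squares on the points currently
    # believed clean and (2) re-selecting the n-k points with the smallest
    # residuals as the clean set; k is an assumed bound on the corruptions.
    p, n = X.shape                      # columns are samples, as in the abstract
    clean = np.arange(n)                # start by trusting every point
    w = np.zeros(p)
    for _ in range(n_iter):
        w, *_ = np.linalg.lstsq(X[:, clean].T, y[clean], rcond=None)
        residuals = np.abs(X.T @ w - y)
        clean = np.argsort(residuals)[: n - k]
    return w, clean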
Journal
Journal title: IEEE Transactions on Information Theory
Year: 2015
ISSN: 0018-9448, 1557-9654
DOI: 10.1109/tit.2015.2472520